**Rendering Algorithms FA24 Final Project Report**

Student name: Kehan Xu
Netid: F007FF0

(#) Introduction

(##) Motivational image

![](../final_project_proposal/motivational_image.JPG width="300px")

My motivational image features an hourglass and its reflection in calm water. The hourglass, a classic symbol of time, explicitly expresses this year's theme: "Reflection on Time."

(##) Proposed features

* Intel's Open Image Denoiser
* Environment Map Emitter
* Microfacet and Rough Dielectric BSDF
* Rendering Stochastic Geometry

(#) Intel's Open Image Denoiser

The motivational image contains both reflective (the water surface) and refractive (the glass of the hourglass) surfaces. A rendering of this scene is expected to contain a large number of fireflies caused by these specular light transport paths, so a denoiser is helpful, and even necessary, to obtain a clean final image.

(##) Technical Details

Intel's Open Image Denoiser (OIDN) is added to Darts through `CPMAddPackage`. I use the CPU version and compile it as a static library.

```
if(USE_OIDN)
  CPMAddPackage(
    URL https://github.com/RenderKit/oidn/releases/download/v2.3.1/oidn-2.3.1.src.tar.gz
    NAME OpenImageDenoise
    GIT_TAG v2.3.1
    VERSION 2.3.1
    OPTIONS "ISPC_EXECUTABLE /home/kehan/Tools/ispc-v1.24.0-linux/bin/ispc"
            "OIDN_STATIC_LIB ON"
  )
  if(OpenImageDenoise_ADDED)
    get_target_property(type OpenImageDenoise TYPE)
    message(STATUS "Oidn library type: ${type}")
    list(APPEND DARTS_DEPEND OpenImageDenoise OpenImageDenoise_device_cpu)
    list(APPEND DARTS_PUBLIC_LIBS OpenImageDenoise OpenImageDenoise_device_cpu)
    add_definitions(-DUSE_OIDN)
  endif()
endif()
```

The denoiser can take in either the color buffer alone, or additionally the albedo and normal buffers. The latter uses the extra per-pixel information to improve the denoising result. To construct these two auxiliary buffers, for each path I record the albedo and normal at the first intersection, i.e., the intersection of the camera ray with the scene. A minimal sketch of the three-buffer filter setup follows the comparison below.

(##) Validation/Rendering

For a 4 SPP rendering, the result of no denoising vs. denoising with only the color buffer vs. denoising with all three buffers:
*(Panels: no denoising | color only | color + albedo + normal)*
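For reference, here is a minimal sketch of the three-buffer setup using OIDN's C++ API; the pixel arrays and dimensions are assumed to come from the Darts image class, and error handling is reduced to a single check:

```
#include <OpenImageDenoise/oidn.hpp>
#include <iostream>

// Minimal sketch: denoise an HDR color buffer, optionally guided by albedo
// and normal buffers (all interleaved float3, one entry per pixel).
void denoise(float *color, float *albedo, float *normal, float *output,
             int width, int height)
{
    oidn::DeviceRef device = oidn::newDevice(oidn::DeviceType::CPU);
    device.commit();

    oidn::FilterRef filter = device.newFilter("RT"); // generic ray tracing filter
    filter.setImage("color",  color,  oidn::Format::Float3, width, height);
    filter.setImage("albedo", albedo, oidn::Format::Float3, width, height); // auxiliary
    filter.setImage("normal", normal, oidn::Format::Float3, width, height); // auxiliary
    filter.setImage("output", output, oidn::Format::Float3, width, height);
    filter.set("hdr", true); // the input is HDR radiance
    filter.commit();
    filter.execute();

    const char *message;
    if (device.getError(message) != oidn::Error::None)
        std::cerr << "OIDN error: " << message << std::endl;
}
```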
The visualization of the albedo and the normal buffers:

![Albedo](cornell_box_albedo.png width="350px") ![Normal](cornell_box_normal.png width="350px")

With the albedo and normal information, the shape of the glass sphere is better preserved during denoising.

**Code coordinates:** DartsConfig.cmake, denoiser.h/cpp, darts.cpp, image.cpp

(#) Environment Map Emitter

The background of the motivational image is far away and blurred, which makes it a perfect fit for an environment map. Not only does an environment map create a photorealistic background, it also serves as both a smooth light source (most of its pixels) and a sharp one (the few extremely bright pixels representing the sun). In our final rendering, the environment map alone provides realistic lighting.

(##) Technical Details

Environment map emitters are implemented as an infinitely large sphere surrounding the scene. Previously, Darts only supported area emitters, so the emission logic was strongly entangled with the surface class. To support various types of light sources, not just environment maps but also point, spot, and directional lights, I constructed standalone classes for emitters: the `emitter` and `emitter_group` classes define and organize emitters, in the same fashion as surfaces. Light sampling now goes through `emitter_group` instead of `surface_group`. For BSDF sampling, since the environment map is conceptually infinitely far away, I sample it when the ray has no intersection with the scene. The MIS weights are properly accounted for in both cases.

Rendering with different environment maps:

![](envmap_base.png width="350px") ![](envmap_another_map.png width="350px")

(###) **Importance Sampling**

Importance sampling is done in texture space instead of directly on the infinite sphere. Since the mapping from the texture to the sphere compresses area near the poles, a sin(theta) weighting term must be folded into the probability. Within texture space, pixels are sampled proportionally to their brightness. The process is essentially sampling a 2D discrete distribution: first, the total pixel brightness of each row is summed up, and a row is sampled proportionally to this sum; then, within the chosen row, a pixel is sampled proportionally to its individual brightness. The resulting PDF is simply the product of the two. A sketch of this procedure is given at the end of this section.

(###) **Rotation**

Rotation of the environment map is supported. My implementation allows the rotation to be specified as a `Vec3f` with three float values representing rotations about the X, Y, and Z axes. Rotations are applied in the order Z (the up axis), then Y, then X. Rotating the environment map:
*(Panels: original | rotated 180 degrees about Y)*
*(Panels: original | rotated 10 degrees about X)*
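To make the sampling procedure concrete, here is a minimal sketch of the 2D discrete distribution under simplified assumptions (a flat `lum` array of per-pixel brightness; guards for zero-weight rows omitted). The actual implementation lives in environment_map.cpp:

```
#include <algorithm>
#include <cmath>
#include <vector>

constexpr float Pi = 3.14159265358979f;

// Sketch: build an unnormalized CDF over rows and, within each row, over
// pixels; brightness is weighted by sin(theta) to account for the area
// compression of the lat-long mapping near the poles.
struct EnvMapSampler {
    int w, h;
    std::vector<float> row_cdf; // cumulative row weights, size h
    std::vector<float> col_cdf; // cumulative pixel weights per row, size w*h

    EnvMapSampler(const std::vector<float> &lum, int w, int h)
        : w(w), h(h), row_cdf(h), col_cdf(size_t(w) * h) {
        float total = 0.f;
        for (int y = 0; y < h; ++y) {
            float sin_theta = std::sin(Pi * (y + 0.5f) / h), row = 0.f;
            for (int x = 0; x < w; ++x)
                col_cdf[y * w + x] = (row += lum[y * w + x] * sin_theta);
            row_cdf[y] = (total += row);
        }
    }

    // Map two uniforms to a pixel; pdf is the discrete probability of (x, y).
    void sample(float u1, float u2, int &x, int &y, float &pdf) const {
        y = int(std::upper_bound(row_cdf.begin(), row_cdf.end(),
                                 u1 * row_cdf.back()) - row_cdf.begin());
        float row_sum = col_cdf[y * w + w - 1];
        x = int(std::upper_bound(col_cdf.begin() + y * w,
                                 col_cdf.begin() + (y + 1) * w,
                                 u2 * row_sum) - (col_cdf.begin() + y * w));
        float prev_row = (y > 0) ? row_cdf[y - 1] : 0.f;
        float prev_col = (x > 0) ? col_cdf[y * w + x - 1] : 0.f;
        pdf = ((row_cdf[y] - prev_row) / row_cdf.back()) * // P(row)
              ((col_cdf[y * w + x] - prev_col) / row_sum); // P(col | row)
    }
};
```

To convert this discrete PDF into a solid-angle density, it is further multiplied by `(w * h) / (2 * Pi * Pi * sin(theta))`, which is where the sin(theta) weighting above cancels the pole compression.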
**Code coordinates:** emitter.h, emitter_group.h, environment_map.cpp, image.cpp

(#) Microfacet and Rough Dielectric BSDF

I implemented microfacet and rough dielectric BSDFs in Darts, with support for both the Beckmann and GGX NDFs, as well as anisotropic roughness.

(##) Technical Details

There exist multiple versions and approximations of the Smith shadowing-masking function; I used PBRT v3 [1] as the reference implementation. Beckmann and GGX make different assumptions about the underlying microfacet normal distribution, and these assumptions affect both the `D` and `G` terms. Sketches of the two terms are given after the comparison figures below.

(###) **Debugging**

The microfacet and rough dielectric BSDFs can be viewed as extensions of the metal and dielectric surfaces that support varying roughness. In other words, when the roughness parameter is set close to 0, the two BSDFs fall back to metal and dielectric, producing delta light transport. Testing with small roughness values and checking that this fallback is correct is helpful for debugging. When implementing the rough dielectric BSDF, keeping track of whether the normal points to the inside or the outside of the surface is an easy source of confusion.

(##) Validation/Rendering

Microfacet materials with varying roughness and index of refraction parameters: the upper row has `ior = 1.5` and `roughness = 0.01, 0.05, 0.15, 0.3, 1.0`; the bottom row has `roughness = 0.15` and `ior = 1.1, 1.3, 1.5, 1.7, 1.9`.
*(Panels: Beckmann | GGX)*
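For reference, the anisotropic GGX distribution term, transcribed from PBRT v3 [1]; the half-vector `wh` is expressed in the local shading frame with the normal along +z, and `Vec3f` is the Darts vector type:

```
#include <cmath>

constexpr float Pi = 3.14159265358979f;

// Anisotropic GGX (Trowbridge-Reitz) normal distribution function [1].
// alpha_x / alpha_y are the roughness values along the tangent directions.
float ggx_D(const Vec3f &wh, float alpha_x, float alpha_y) {
    float cos2 = wh.z * wh.z;
    float sin2 = std::max(0.f, 1.f - cos2);
    float tan2 = sin2 / cos2;
    if (std::isinf(tan2))
        return 0.f; // grazing half-vector

    // e = tan^2(theta) * (cos^2(phi)/alpha_x^2 + sin^2(phi)/alpha_y^2)
    float cos_phi2 = sin2 > 0.f ? (wh.x * wh.x) / sin2 : 0.f;
    float sin_phi2 = sin2 > 0.f ? (wh.y * wh.y) / sin2 : 0.f;
    float e = tan2 * (cos_phi2 / (alpha_x * alpha_x) +
                      sin_phi2 / (alpha_y * alpha_y));

    float cos4 = cos2 * cos2;
    return 1.f / (Pi * alpha_x * alpha_y * cos4 * (1.f + e) * (1.f + e));
}
```

Note that GGX falls off only polynomially in tan^2(theta), whereas Beckmann falls off as exp(-tan^2(theta)/alpha^2); this is the source of the heavier-tail behavior observed below.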
The comparison confirms that the GGX model has heavier tails than the Beckmann model at large incident angles.

Rough dielectric materials with the same roughness and index of refraction parameters as above:
*(Panels: Beckmann | GGX)*
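The matching Smith masking-shadowing term, again following PBRT v3 [1] (isotropic case for brevity): both models share `G = 1 / (1 + Lambda(wo) + Lambda(wi))` and differ only in `Lambda`:

```
#include <cmath>

// Smith masking-shadowing, following PBRT v3 [1]. Lambda measures the ratio
// of masked to visible microfacet area seen along a direction with the given
// tan(theta).

float lambda_ggx(float tan_theta, float alpha) {
    if (std::isinf(tan_theta)) return 0.f;
    float a2t2 = alpha * alpha * tan_theta * tan_theta;
    return (-1.f + std::sqrt(1.f + a2t2)) / 2.f; // closed form for GGX
}

float lambda_beckmann(float tan_theta, float alpha) {
    if (std::isinf(tan_theta)) return 0.f;
    float a = 1.f / (alpha * std::abs(tan_theta));
    if (a >= 1.6f) return 0.f;                   // rational approximation [1]
    return (1.f - 1.259f * a + 0.396f * a * a) /
           (3.535f * a + 2.181f * a * a);
}

// G for a (wo, wi) pair; tan_o / tan_i are tan(theta) of the two directions.
float smith_G(float tan_o, float tan_i, float alpha, bool ggx) {
    auto lambda = ggx ? lambda_ggx : lambda_beckmann;
    return 1.f / (1.f + lambda(tan_o, alpha) + lambda(tan_i, alpha));
}
```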
Anisotropic roughness: the first row has `alpha_x = 0.2` and `alpha_y = 0.002, 0.02, 0.06, 0.2, 0.3`; the second row has `alpha_x = 0.02` and `alpha_y = 0.02, 0.1, 0.2, 0.3, 0.4`.

![](microfacet_anisotropic.png width="700px")

**Code coordinates:** microfacet.cpp, rough_dielectric.cpp

(#) Rendering Stochastic Geometry

In this section, I reimplement, step by step, the paper by Seyb et al. [2], which proposes to unify the treatment of microfacet surfaces and participating media using stochastic geometry. Previously, surfaces and volumes were each firmly encapsulated in their own appearance models, yet both theories can be derived through stochastic modeling. The authors also introduce a relatively efficient way to render this type of geometry.

The stochasticity of the geometry is represented through a Gaussian process. Rendering a stochastic geometry means computing the ensemble light transport over all realizations; note that this is different from computing the light transport on the average geometry.

![Figure reproduced from [2]](fig_GPIS.png width="500px")

(##) **Render SDF**

Before defining the stochastic geometry, I first need to define the mean shape and be able to render it. Since stochasticity will be added to this geometry later, the mean is defined as an implicit surface instead of a mesh. More specifically, I only use SDFs as the mean shape, so that sphere tracing can be used to find the ray-surface intersection. My code supports rendering meshes and SDFs in the same scene: I compute separate intersections with the meshes and the SDFs, and take the closer of the two as the actual intersection.

Rendering two spheres, where the upper one is a mesh and the lower one an SDF:

![](two_spheres_SDF.png width="500px")

Rendering two other SDFs:

![Bunny](bunny_SDF.png width="350px") ![Mobius ring](mobius_SDF.png width="350px")

(###) **Challenges**

Rendering multiple SDFs within one scene is challenging. Combining two SDFs still produces the correct zero level set, but the resulting function is, by definition, no longer an SDF. The combined implicit surface is usually close to an SDF in most locations, but there is no longer a guarantee that sphere tracing finds the correct intersection. In my final scene, which contains two SDFs, I had to place them quite far apart so that they have limited impact on one another.

(##) **Render Stochastic Implicit Surface**

The sum of an SDF and a Gaussian process is no longer an SDF, so sphere tracing alone is not guaranteed to find the correct intersection. Instead, a correct but computationally expensive approach combines sphere tracing and ray marching: first derive bounds on the GP, then sphere trace the corresponding positive and negative level sets to obtain the interval guaranteed to contain the intersection, and finally ray march within this interval to find the actual intersection. A sketch of this routine follows the comparison below.

Rendering the stochastic geometry with different Gaussian process parameters:
*(Panels: lower-frequency isotropic covariance kernel | higher-frequency isotropic covariance kernel | anisotropic covariance kernel)*
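Here is a simplified sketch of the two-phase intersection routine. `sdf` and `gp` are hypothetical stand-ins for the real classes, `band` is a conservative bound on |g| (e.g., three standard deviations), and only the entry into the uncertainty band is sphere traced here; the exit is detected while marching:

```
#include <cmath>
#include <functional>

// Sketch: intersect a ray with the stochastic surface f(x) + g(x) = 0, where
// f is the mean SDF and g is one realization of the Gaussian process, assumed
// bounded (with high probability) by |g| <= band.
float intersect_stochastic(const Ray3f &ray, float t_max,
                           const std::function<float(const Vec3f &)> &sdf,
                           const std::function<float(const Vec3f &)> &gp,
                           float band, float step, float eps)
{
    auto at = [&](float t) { return ray.o + t * ray.d; };

    // Phase 1: sphere trace the expanded level set f(x) = band. Outside the
    // band |f| <= band the stochastic surface cannot lie, so f - band is a
    // safe stepping distance.
    float t = 0.0f;
    while (t < t_max) {
        float d = sdf(at(t)) - band;
        if (d < eps)
            break; // entered the band
        t += d;
    }
    if (t >= t_max)
        return -1.0f; // no hit

    // Phase 2: ray march inside the band until f + g changes sign.
    float prev = sdf(at(t)) + gp(at(t));
    for (t += step; t < t_max; t += step) {
        float f   = sdf(at(t));
        float cur = f + gp(at(t));
        if (prev > 0.0f && cur <= 0.0f)
            return t - 0.5f * step; // hit; refine by bisection if needed
        if (f > band)
            break; // left the band; a full version resumes phase 1 here
        prev = cur;
    }
    return -1.0f;
}
```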
![](bunny_single_realization.png width="500px")

(###) **Challenges**

Ray marching is very slow and requires tweaking to find the right step size: it is a tradeoff between computation and the precision of the intersection location. A trick I employ for faster rendering is to disable ray marching and switch back to sphere tracing the base shape starting at the third bounce; the visual effect is minimal.

(##) **Render Per-pixel Realization**

Up to this point we can render the SDF plus one realization of the Gaussian process, which gives one realization of the stochastic geometry. To obtain the ensemble light transport, each ray should interact with a different geometry. This is achieved by passing a per-ray random seed into the Gaussian process sampler; a sketch follows the comparison below. Notice that with per-pixel realizations, the object takes on a fuzzy appearance that is visually similar to a volume.
*(Panels: one global realization | per-pixel realizations)*
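A sketch of the seeding scheme: each camera ray carries its pixel and sample indices (the `pixel_sample_segment_idx` field set in the render loop under Additional Features), which are hashed into the seed of the GP sampler so that all bounces of one path see the same realization while different paths see independent ones. The hash choice here (Wang hash) is illustrative:

```
#include <cstdint>

// Wang hash: cheap integer mixing to decorrelate per-pixel seeds.
uint32_t wang_hash(uint32_t s) {
    s = (s ^ 61u) ^ (s >> 16);
    s *= 9u;
    s ^= s >> 4;
    s *= 0x27d4eb2du;
    s ^= s >> 15;
    return s;
}

// Combine the pixel index and the sample index carried by the ray into one
// GP seed, so every path samples its own, internally consistent realization.
uint32_t gp_seed(const Vec2u &pixel_sample_segment_idx) {
    return wang_hash(pixel_sample_segment_idx.x * 9781u +
                     pixel_sample_segment_idx.y);
}
```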
(##) **Efficient Rendering**

Ray marching is slow because evaluating the value and the gradient of the 3D Gaussian process at every step is costly. For per-pixel realizations, an alternative is to evaluate only a 1D GP that is conceptually a "slice" of the 3D GP along the ray direction. This is the acceleration technique proposed in [2]; a conceptual sketch follows the comparison below.

Rendering the same stochastic geometry with 3D and 1D Gaussian process sampling:
*(Panels: 3D sampling | 1D sampling)*
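For intuition, here is a naive, unoptimized sketch of sequential 1D GP sampling along a ray: each new march point is drawn from the normal distribution conditioned on the values already realized on this ray. It assumes a squared-exponential kernel and uses Eigen for the linear solves; this is for exposition only and is not the paper's optimized scheme:

```
#include <Eigen/Dense>
#include <cmath>
#include <random>
#include <vector>

// Sample a zero-mean stationary 1D GP g(t) point by point along the ray.
struct GP1D {
    float sigma2, ell;         // kernel variance and length scale (assumed)
    std::vector<float> ts, vs; // points realized so far on this ray
    std::mt19937 rng;

    // Squared-exponential covariance kernel (stand-in for the real kernels).
    float k(float a, float b) const {
        float d = (a - b) / ell;
        return sigma2 * std::exp(-0.5f * d * d);
    }

    float sample(float t) {
        int n = int(ts.size());
        float mu = 0.0f, var = sigma2;
        if (n > 0) {
            Eigen::MatrixXf K(n, n);
            Eigen::VectorXf kv(n), v(n);
            for (int i = 0; i < n; ++i) {
                kv(i) = k(t, ts[i]);
                v(i)  = vs[i];
                for (int j = 0; j < n; ++j)
                    K(i, j) = k(ts[i], ts[j]);
            }
            Eigen::VectorXf w = K.ldlt().solve(kv);   // w = K^{-1} k
            mu  = w.dot(v);                           // conditional mean
            var = std::max(0.0f, sigma2 - w.dot(kv)); // conditional variance
        }
        std::normal_distribution<float> N(mu, std::sqrt(var + 1e-8f));
        float val = N(rng);
        ts.push_back(t);
        vs.push_back(val);
        return val;
    }
};
```

Each draw here costs a dense solve, so a practical implementation would update a factorization incrementally; even so, working in 1D avoids evaluating the full 3D GP and its gradient at every step.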
(###) **Challenges and Future Work**

The darkening of 1D sampling relative to 3D sampling is expected: it is caused by the inconsistency between neighboring ray segments at the intersection point. The solution is to condition the realization of the reflected ray to match the 1D gradient sampled from the previous ray; this phenomenon and its solution are described in [2] as different memory models. Otherwise, I verified that the NDFs produced by the two sampling methods follow the same distribution. The visualization below shows only the NDF of the Gaussian process, without accounting for the mean SDF:
*(Panels: 3D-sampled NDF with correlation | 3D-sampled NDF without correlation | 1D-sampled NDF)*
**Code coordinates:** sdf.h/cpp, sdf_group.h/cpp, sdf_bunny.h, sdf_mobius.h

(#) Additional Features

(##) Multi-threading

```
const uint spp        = m_sampler->sample_count();
const uint block_size = 32;

// One sampler per pool thread, cloned lazily from the scene's sampler.
std::map<uint32_t, unique_ptr<Sampler>> thread_samplers;

dr::parallel_for(
    dr::blocked_range<uint32_t>(0, image.width() * image.height(), block_size),
    [&](const dr::blocked_range<uint32_t> &block_range)
    {
        unique_ptr<Sampler> &thread_sampler = thread_samplers[pool_thread_id()];
        if (!thread_sampler)
            thread_sampler = m_sampler->clone();

        for (auto j = block_range.begin(); j != block_range.end(); ++j)
        {
            // TBD: 2D block size instead of 1D
            uint32_t y = j / image.width();
            uint32_t x = j % image.width();
            thread_sampler->start_pixel(x, y);

            Color3f color = Color3f(0.0f);
            for (auto i : range(spp))
            {
                auto ray = m_camera->generate_ray(Vec2f(x + randf(), y + randf()));
                // Tag the ray with its pixel and sample index (used, e.g., to
                // seed per-pixel realizations of the stochastic geometry).
                ray.pixel_sample_segment_idx = Vec2u(j, i);
                if (m_integrator)
                {
                    Color3f Li = m_integrator->Li(*this, *thread_sampler, ray);
                    color += Li;
                }
                else
                    color += recursive_color(ray, 0);
                thread_sampler->advance();
            }
            image(x, y) = color / spp;
            ++progress;
        }
    });
```

**Code coordinates:** scene.cpp

(#) Final Image

![Final image, rendered with 4096 SPP and denoised with OIDN](submission.jpg width="500px")

**Techniques:**

* The hourglass:
  * Composed of microfacet and rough dielectric materials.
* The water surface:
  * A microfacet plane with normal mapping.
* The Mobius ring:
  * The right half shows the mean-shape SDF.
  * The left half is one realization after adding the Gaussian process.
  * The ring is blurred due to depth of field; a clearer version is attached below.
* The shiny bulb on top of the hourglass:
  * Visually similar to a volume.
  * It is actually the ensemble light transport of a stochastic geometry with a sphere as its mean shape.

![Mobius ring in focus](ring_clear.png width="300px")

(#) Acknowledgement

A big thank you to Prof. Wojciech Jarosz for helping with the stochastic geometry feature, devoting a lot of time to discussions, and contributing code and SDFs to the Shadertoy prototype.

(##) Resources

* The Mobius ring SDF: https://www.shadertoy.com/view/XldSDs
* The hourglass and the flowers are from Sketchfab.
* The environment map HDRI is from Poly Haven.

(##) References

[1] Matt Pharr, Wenzel Jakob, and Greg Humphreys. *Physically Based Rendering: From Theory to Implementation*, 3rd ed., Section 8.4, "Microfacet Models": https://www.pbr-book.org/3ed-2018/Reflection_Models/Microfacet_Models

[2] Dario Seyb et al. 2024. "From microfacets to participating media: A unified theory of light transport with stochastic geometry." *ACM Transactions on Graphics (Proceedings of SIGGRAPH)*.